70 research outputs found

    An Analogue-Digital Model of Computation: Turing Machines with Physical Oracles

    We introduce an abstract analogue-digital model of computation that couples Turing machines to oracles that are physical processes. Since any oracle has the potential to boost the computational power of a Turing machine, the effect of adding a physical process raises interesting questions. Do physical processes add significantly to the power of Turing machines; can they break the Turing Barrier? Does the power of the Turing machine vary with different physical processes? Specifically, here, we take a physical oracle to be a physical experiment, controlled by the Turing machine, that measures some physical quantity. There are three protocols of communication between the Turing machine and the oracle that simulate the types of error propagation common to analogue-digital devices, namely: infinite precision, unbounded precision, and fixed precision. These three types of precision introduce three variants of the physical oracle model. On fixing one archetypal experiment, we show how to classify the computational power of the three models by establishing lower and upper bounds. Using new techniques and ideas about timing, we give a complete classification.
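The query/answer protocol described here can be illustrated with a minimal sketch. Everything below is an assumption for illustration (the function names, the bisection protocol, and the way the oracle is simulated are not taken from the paper): an unknown quantity y in (0, 1) is probed by comparison queries, and below the experiment's resolution epsilon the answer is arbitrary.

```python
import random

def physical_oracle(query, y, epsilon, rng=random):
    """Simulated experiment comparing a dyadic query against the unknown
    quantity y; below the resolution epsilon the answer is arbitrary."""
    if abs(query - y) < epsilon:
        return rng.choice([True, False])  # measurement below resolution
    return query < y

def measure(y, bits, epsilon):
    """Bisection protocol: the Turing machine extracts up to `bits`
    binary digits of y through repeated oracle queries."""
    lo, hi = 0.0, 1.0
    for _ in range(bits):
        mid = (lo + hi) / 2
        if physical_oracle(mid, y, epsilon):
            lo = mid  # oracle says y lies above the query
        else:
            hi = mid
    return (lo + hi) / 2
```

In this toy picture, epsilon = 0 corresponds to infinite precision, a fixed epsilon > 0 to fixed precision, and an epsilon that shrinks with each query to unbounded precision.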

    Automating Vehicles by Deep Reinforcement Learning using Task Separation with Hill Climbing

    Within the context of autonomous driving a model-based reinforcement learning algorithm is proposed for the design of neural network-parameterized controllers. Classical model-based control methods, which include sampling- and lattice-based algorithms and model predictive control, suffer from the trade-off between model complexity and computational burden required for the online solution of expensive optimization or search problems at every short sampling time. To circumvent this trade-off, a 2-step procedure is motivated: first learning of a controller during offline training based on an arbitrarily complicated mathematical system model, before online fast feedforward evaluation of the trained controller. The contribution of this paper is the proposition of a simple gradient-free and model-based algorithm for deep reinforcement learning using task separation with hill climbing (TSHC). In particular, (i) simultaneous training on separate deterministic tasks with the purpose of encoding many motion primitives in a neural network, and (ii) the employment of maximally sparse rewards in combination with virtual velocity constraints (VVCs) in setpoint proximity are advocated.
    Comment: 10 pages, 6 figures, 1 table
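The gradient-free hill-climbing core of such an approach can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the function names and the toy quadratic reward are hypothetical, and in TSHC the parameter vector would be the flattened controller weights while the reward would sum sparse per-task rewards over separate deterministic tasks.

```python
import random

def hill_climb(total_reward, dim, iters=500, sigma=0.1, seed=0):
    """Gradient-free hill climbing over a parameter vector: perturb the
    current parameters and keep the perturbation only if the summed
    reward across tasks improves."""
    rng = random.Random(seed)
    theta = [0.0] * dim
    best = total_reward(theta)
    for _ in range(iters):
        candidate = [t + rng.gauss(0.0, sigma) for t in theta]
        r = total_reward(candidate)
        if r > best:
            theta, best = candidate, r
    return theta, best
```

Because only reward comparisons are used, the method needs no gradients of the system model, which is what makes arbitrarily complicated offline simulation models usable.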

    On the Bounds of Function Approximations

    Within machine learning, the subfield of Neural Architecture Search (NAS) has recently garnered research attention due to its ability to improve upon human-designed models. However, the computational requirements for finding an exact solution to this problem are often intractable, and the design of the search space still requires manual intervention. In this paper we attempt to establish a formalized framework from which we can better understand the computational bounds of NAS in relation to its search space. For this, we first reformulate the function approximation problem in terms of sequences of functions, and we call it the Function Approximation (FA) problem; then we show that it is computationally infeasible to devise a procedure that solves FA for all functions to zero error, regardless of the search space. We show also that such error will be minimal if a specific class of functions is present in the search space. Subsequently, we show that machine learning as a mathematical problem is a solution strategy for FA, albeit not an effective one, and further describe a stronger version of this approach: the Approximate Architectural Search Problem (a-ASP), which is the mathematical equivalent of NAS. We leverage the framework from this paper and results from the literature to describe the conditions under which a-ASP can potentially solve FA as well as an exhaustive search, but in polynomial time.
    Comment: Accepted as a full paper at ICANN 2019. The final, authenticated publication will be available at https://doi.org/10.1007/978-3-030-30487-4_3
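The exhaustive-search baseline against which a-ASP is compared can be made concrete with a toy stand-in (all names are illustrative and the error measure is an assumption): enumerate a finite space of candidate functions and keep the one with the smallest worst-case error on a set of sample points.

```python
def architecture_search(candidates, target, points):
    """Exhaustive search over a finite space of candidate functions,
    returning the one with the smallest worst-case (sup) error on
    `points` -- a toy stand-in for searching an architecture space."""
    def sup_error(f):
        return max(abs(f(x) - target(x)) for x in points)
    return min(candidates, key=sup_error)

# Hypothetical usage: three candidate "architectures", one of which
# matches the target exactly on the sample points.
candidates = [lambda x: x, lambda x: x * x, lambda x: 2 * x]
best = architecture_search(candidates, lambda x: x * x, [0.0, 0.5, 1.0])
```

The paper's point, in these terms, is about when a cleverer search can match this exhaustive scan while inspecting only polynomially many candidates.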

    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize the known results, and point to relevant references in the literature.

    Optimization hardness as transient chaos in an analog approach to constraint satisfaction

    Boolean satisfiability [1] (k-SAT) is one of the most studied optimization problems, as an efficient (that is, polynomial-time) solution to k-SAT (for k ≥ 3) implies efficient solutions to a large number of hard optimization problems [2,3]. Here we propose a mapping of k-SAT into a deterministic continuous-time dynamical system with a unique correspondence between its attractors and the k-SAT solution clusters. We show that beyond a constraint density threshold, the analog trajectories become transiently chaotic [4-7], and the boundaries between the basins of attraction [8] of the solution clusters become fractal [7-9], signaling the appearance of optimization hardness [10]. Analytical arguments and simulations indicate that the system always finds solutions for satisfiable formulae even in the frozen regimes of random 3-SAT [11] and of locked occupation problems [12] (considered among the hardest algorithmic benchmarks); a property partly due to the system's hyperbolic [4,13] character. The system finds solutions in polynomial continuous time, however at the expense of exponential fluctuations in its energy function.
    Comment: 27 pages, 14 figures
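A minimal numerical sketch of a continuous-time dynamics of this flavor is given below, under stated assumptions: analog variables s_i in [-1, 1] stand for truth values, each clause m has a term K_m that vanishes exactly when the clause is satisfied and an auxiliary weight a_m growing as da_m/dt = a_m K_m, and the s_i follow the gradient of the energy V = Σ_m a_m K_m². The Euler step size, initial state, and clause encoding are illustrative choices, not from the paper.

```python
def analog_sat(clauses, n, dt=0.05, steps=4000):
    """Euler integration of an analog SAT dynamics. A clause is a list of
    (variable_index, sign) pairs with sign = +1 for a positive literal,
    -1 for a negated one; K_m -> 0 exactly when clause m is satisfied."""
    s = [0.1 * ((-1) ** i) for i in range(n)]  # arbitrary initial state
    a = [1.0] * len(clauses)                   # auxiliary clause weights
    for _ in range(steps):
        K = []
        for clause in clauses:
            k = 1.0
            for (var, sign) in clause:
                k *= (1 - sign * s[var]) / 2
            K.append(k)
        ds = [0.0] * n
        for m, clause in enumerate(clauses):
            for (var, sign) in clause:
                denom = (1 - sign * s[var]) / 2
                kmi = K[m] / denom if denom > 1e-12 else 0.0
                ds[var] += a[m] * sign * kmi * K[m]  # -dV/ds_i
        for i in range(n):
            s[i] = max(-1.0, min(1.0, s[i] + dt * ds[i]))
        for m in range(len(clauses)):
            a[m] *= 1 + dt * K[m]  # da_m/dt = a_m * K_m
    return [si > 0 for si in s]  # threshold to a Boolean assignment
```

On easy satisfiable formulae the weights a_m drive the trajectory into a basin where every K_m reaches zero; the exponentially growing a_m are the discrete counterpart of the exponential energy fluctuations mentioned in the abstract.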

    Non-linear Autoregressive Neural Networks to Forecast Short-Term Solar Radiation for Photovoltaic Energy Predictions

    Nowadays, green energy is considered a viable solution to curb CO2 emissions and the greenhouse effect. Indeed, it is expected that Renewable Energy Sources (RES) will cover 40% of the total energy demand by 2040. This will move power distribution towards decentralized and cooperative systems, also called smart grids. Among RES, solar energy will play a crucial role. However, reliable models and tools are needed to forecast and estimate renewable energy production with good accuracy over short-term horizons. Such tools will unlock new services for smart grid management. In this paper, we propose an innovative methodology for implementing two different non-linear autoregressive neural networks to forecast Global Horizontal Solar Irradiance (GHI) over short-term horizons (i.e. from 15 to 120 min ahead). Both neural networks have been implemented, trained and validated on a dataset consisting of four years of solar radiation values collected by a real weather station. We also present the experimental results, discussing and comparing the accuracy of both neural networks. The resulting GHI forecast is then given as input to a photovoltaic simulator to predict energy production over short-term horizons. Finally, we present the results of this photovoltaic energy estimation, also discussing their accuracy.
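The multi-step structure of a non-linear autoregressive forecast can be sketched generically (this is an illustration of the NAR recursion only; the function names are hypothetical and the trained neural network is abstracted as `model`): each prediction is fed back into the input window to produce the next step, e.g. 15 to 120 min ahead at a 15-min resolution.

```python
def nar_forecast(history, model, lags, horizon):
    """Non-linear autoregressive forecasting loop: `model` maps a window
    of the last `lags` observations to the next value, and predictions
    are fed back to produce a multi-step-ahead forecast."""
    window = list(history[-lags:])
    out = []
    for _ in range(horizon):
        y_next = model(window)       # one-step-ahead prediction
        out.append(y_next)
        window = window[1:] + [y_next]  # slide the window forward
    return out

# Hypothetical usage with a trivial placeholder model (a moving average
# standing in for the trained network).
forecast = nar_forecast([1.0, 1.0, 1.0, 1.0], lambda w: sum(w) / len(w), 4, 3)
```

The feedback of predictions is what makes forecast error compound with the horizon, which is why the paper evaluates accuracy separately at each lead time.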

    Functional kinds: a skeptical look

    The functionalist approach to kinds has suffered recently due to its association with law-based approaches to induction and explanation. Philosophers of science increasingly view nomological approaches as inappropriate for the special sciences like psychology and biology, which has led to a surge of interest in approaches to natural kinds that are more obviously compatible with mechanistic and model-based methods, especially homeostatic property cluster theory. But can the functionalist approach to kinds be weaned off its dependency on laws? Dan Weiskopf has recently offered a reboot of the functionalist program by replacing its nomological commitments with a model-based approach more closely derived from practice in psychology. Roughly, Weiskopf holds that the natural kinds of psychology will be the functional properties that feature in many empirically successful cognitive models, and that those properties need not be localized to parts of an underlying mechanism. I here skeptically examine the three modeling practices that Weiskopf thinks introduce such non-localizable properties: fictionalization, reification, and functional abstraction. In each case, I argue that recognizing functional properties introduced by these practices as autonomous kinds comes at a clear cost to those explanations' counterfactual explanatory power. At each step, a tempting functionalist response is parochialism: to hold that the false or omitted counterfactuals fall outside the modeler's explanatory aims, and so should not be counted against functional kinds. I conclude by noting the dangers this attitude poses to scientific disagreement, inviting functionalists to better articulate how the individuation conditions for functional kinds might outstrip the perspective of a single modeler.

    Star-forming cores embedded in a massive cold clump: Fragmentation, collapse and energetic outflows

    The fate of massive cold clumps, their internal structure and collapse need to be characterised to understand the initial conditions for the formation of high-mass stars, stellar systems, and the origin of associations and clusters. We explore the onset of star formation in the 75 M_sun SMM1 clump in the region ISOSS J18364-0221 using infrared and (sub-)millimetre observations including interferometry. This contracting clump has fragmented into two compact cores SMM1 North and South of 0.05 pc radius, having masses of 15 and 10 M_sun, and luminosities of 20 and 180 L_sun. SMM1 South harbours a source traced at 24 and 70 μm, drives an energetic molecular outflow, and appears supersonically turbulent at the core centre. SMM1 North has no infrared counterparts and shows lower levels of turbulence, but also drives an outflow. Both outflows appear collimated and parsec-scale near-infrared features probably trace the outflow-powering jets. We derived mass outflow rates of at least 4E-5 M_sun/yr and outflow timescales of less than 1E4 yr. Our HCN(1-0) modelling for SMM1 South yielded an infall velocity of 0.14 km/s and an estimated mass infall rate of 3E-5 M_sun/yr. Both cores may harbour seeds of intermediate- or high-mass stars. We compare the derived core properties with recent simulations of massive core collapse. They are consistent with the very early stages dominated by accretion luminosity.
    Comment: Accepted for publication in ApJ, 14 pages, 7 figures

    Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, which occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
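The "explaining away" effect in a converging-arrows network can be reproduced with plain Gibbs sampling, which is the abstract computation the spiking networks are argued to implement. The sketch below is illustrative only: the three-node network (Rain -> Wet <- Sprinkler) and all CPT numbers are assumptions, not taken from the paper.

```python
import random

def gibbs_explaining_away(steps=20000, seed=1):
    """Gibbs sampling in a tiny Bayesian network with converging arrows,
    Rain -> Wet <- Sprinkler, conditioned on the evidence Wet = 1.
    Returns estimates of P(Rain=1 | Wet=1, Sprinkler=1) and
    P(Rain=1 | Wet=1); the first should be smaller (explaining away)."""
    p_rain, p_spr = 0.2, 0.3              # illustrative priors
    def p_wet(r, s):                      # P(Wet=1 | Rain=r, Sprinkler=s)
        return 0.95 if (r or s) else 0.01
    rng = random.Random(seed)
    r, s = 1, 1
    n_r = n_s = n_rs = 0
    for _ in range(steps):
        # resample Rain from P(Rain | Sprinkler=s, Wet=1)
        w1 = p_rain * p_wet(1, s)
        w0 = (1 - p_rain) * p_wet(0, s)
        r = 1 if rng.random() < w1 / (w1 + w0) else 0
        # resample Sprinkler from P(Sprinkler | Rain=r, Wet=1)
        w1 = p_spr * p_wet(r, 1)
        w0 = (1 - p_spr) * p_wet(r, 0)
        s = 1 if rng.random() < w1 / (w1 + w0) else 0
        n_r += r
        n_s += s
        n_rs += r * s
    return n_rs / n_s, n_r / steps
```

In the neural interpretation, each resampling step corresponds to stochastic spiking driven by the other neurons' current states; observing the sprinkler on lowers the sampled probability of rain even though both remain consistent with wet grass.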